
    Objective Classes for Micro-Facial Expression Recognition

    Micro-expressions are brief, spontaneous facial expressions that appear when a person conceals an emotion, making them different from normal facial expressions in subtlety and duration. Currently, emotion classes within the CASME II dataset are based on Action Units and self-reports, creating conflicts during machine learning training. We show that classifying expressions using Action Units, instead of predicted emotion, removes the potential bias of human reporting. The proposed classes are tested using the LBP-TOP, HOOF and HOG 3D feature descriptors. The experiments are evaluated on two benchmark FACS-coded datasets: CASME II and SAMM. The best result achieves 86.35% accuracy when classifying the proposed 5 classes on CASME II using HOG 3D, outperforming the state-of-the-art 5-class emotion-based classification on CASME II. Results indicate that classification based on Action Units provides an objective method to improve micro-expression recognition. (Comment: 11 pages, 4 figures and 5 tables. This paper will be submitted for journal review.)
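    The AU-based relabelling described above can be sketched as a mapping from FACS Action Unit codes to objective classes. The mapping below is purely illustrative, not the paper's actual class table:

```python
# Illustrative sketch: assign objective classes from FACS Action Units
# rather than self-reported emotion. The AU-to-class mapping here is an
# assumption for demonstration, not the CASME II/SAMM class definition.

def objective_class(aus):
    """Return a class label for a set of AU codes (illustrative mapping)."""
    aus = set(aus)
    if aus & {6, 12}:       # cheek raiser / lip corner puller
        return "happiness-related"
    if aus & {1, 2, 5}:     # inner/outer brow raiser, upper lid raiser
        return "surprise-related"
    if aus & {4, 7}:        # brow lowerer / lid tightener
        return "negative-related"
    return "others"

# Relabel a few hypothetical samples by their coded AUs.
labels = [objective_class(s) for s in [{6, 12}, {1, 2}, {4}, {17}]]
```

    Grouping by coded AUs in this way sidesteps the self-report step entirely, which is the source of bias the abstract identifies.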

    3D-CNN for Facial Micro- and Macro-expression Spotting on Long Video Sequences using Temporal Oriented Reference Frame

    Facial expression spotting is the preliminary step for micro- and macro-expression analysis. The task of reliably spotting such expressions in video sequences is currently unsolved. The current best systems depend on optical flow methods to extract regional motion features before categorising that motion into a specific class of facial movement. Optical flow is susceptible to drift error, which introduces a serious problem for motions with long-term dependencies, such as macro-expressions in high frame-rate video. We propose a purely deep learning solution which, rather than track frame-differential motion, compares each frame, via a convolutional model, with two temporally local reference frames. Reference frames are sampled according to calculated micro- and macro-expression durations. We show that our solution achieves state-of-the-art performance (F1-score of 0.126) on a dataset of high frame-rate (200 fps) long video sequences (SAMM-LV) and is competitive on a low frame-rate (30 fps) dataset (CAS(ME)2). In this paper, we document our deep learning model and parameters, including how we use local contrast normalisation, which we show is critical for optimal results. We surpass a limitation of existing methods and advance the state of deep learning in the domain of facial expression spotting.
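    The reference-frame sampling idea can be sketched as follows. The function name and the duration values are assumptions for illustration, not the paper's calculated parameters:

```python
def reference_frames(t, fps, micro_s=0.25, macro_s=2.0):
    """For frame index t, return two temporally local reference frame
    indices, one offset by an assumed micro-expression duration and one
    by an assumed macro-expression duration (illustrative values, not
    the paper's calculated durations)."""
    micro_ref = max(0, t - int(micro_s * fps))
    macro_ref = max(0, t - int(macro_s * fps))
    return micro_ref, macro_ref

# At 200 fps (SAMM-LV), frame 500 is compared against frames 450 and 100.
refs = reference_frames(500, fps=200)
```

    Comparing each frame against fixed-offset references, instead of chaining frame-to-frame flow, is what avoids the accumulation of drift error over long sequences.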

    Self-supervised spontaneous latent-based facial expression sequence generation

    In this paper, we investigate the spontaneity issue in facial expression sequence generation. Current leading methods in the field commonly rely on manually adjusted conditional variables to direct the model to generate a specific class of expression. We propose a neural-network-based method which uses Gaussian noise to model spontaneity in the generation process, removing the need for manual control of conditional generation variables. Our model takes two sequential images as input, with additive noise, and produces the next image in the sequence. We trained two types of models: single-expression and mixed-expression. With single-expression models, the unique facial movements of a given emotion class can be generated; with mixed-expression models, fully spontaneous expression sequence generation can be achieved. We compared our method to current leading generation methods on a variety of publicly available datasets. Initial qualitative results show our method produces visually more realistic expressions and facial action unit (AU) trajectories; initial quantitative results using image quality metrics (SSIM and NIQE) show the quality of our generated images is higher. Our approach and results are novel in the field of facial expression generation, with potential wider applications to other sequence generation tasks.
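    As a rough illustration of the SSIM metric mentioned in the quantitative comparison, a simplified single-window (non-sliding) variant can be computed with NumPy alone; the stabilising constants follow the standard SSIM formulation, but this is a sketch, not the evaluation code used in the paper:

```python
import numpy as np

def ssim_global(x, y, data_range=255.0):
    """Single-window SSIM over whole images (no sliding window).
    A simplified form of the metric used to score generated frames."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

# Identical images score 1.0; dissimilar images score lower.
img = np.random.default_rng(0).uniform(0, 255, (64, 64))
perfect = ssim_global(img, img)
```

    Production evaluations typically use the windowed SSIM (e.g. scikit-image's `structural_similarity`), which is more sensitive to local structure than this global form.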

    Synthesising Facial Macro- and Micro-Expressions Using Reference Guided Style Transfer

    From MDPI via Jisc Publications Router. History: accepted 2021-08-06, pub-electronic 2021-08-11. Publication status: Published.
    Long video datasets of facial macro- and micro-expressions remain in strong demand given the current dominance of data-hungry deep learning methods. There are limited methods for generating long videos which contain micro-expressions. Moreover, there is a lack of performance metrics to quantify the generated data. To address these research gaps, we introduce a new approach to generate synthetic long videos and recommend assessment methods to inspect dataset quality. For synthetic long video generation, we use the state-of-the-art generative adversarial network style transfer method, StarGANv2. Using StarGANv2 pre-trained on the CelebA dataset, we transfer the style of a reference image from SAMM long videos (a facial micro- and macro-expression long video dataset) onto a source image from the FFHQ dataset to generate a synthetic dataset (SAMM-SYNTH). We evaluate SAMM-SYNTH by conducting an analysis based on the facial action units detected by OpenFace. For quantitative measurement, our findings show high correlation between the original and synthetic data on two Action Units (AUs), AU12 and AU6, with Pearson's correlations of 0.74 and 0.72, respectively. This is further supported by the evaluation method provided by OpenFace on those AUs, which also yields high scores of 0.85 and 0.59. Additionally, optical flow is used to visually compare the original and transferred facial movements. With this article, we publish our dataset to enable future research and to increase the data pool of micro-expression research, especially for the spotting task.
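    The AU-trace comparison can be sketched with NumPy's Pearson correlation. The per-frame intensity values below are synthetic stand-ins, not OpenFace outputs from SAMM or SAMM-SYNTH:

```python
import numpy as np

# Illustrative per-frame AU12 intensity traces for an original clip and
# its style-transferred counterpart (made-up numbers for demonstration).
au12_original  = np.array([0.1, 0.3, 0.8, 1.2, 0.9, 0.4])
au12_synthetic = np.array([0.2, 0.4, 0.7, 1.1, 1.0, 0.3])

# Pearson correlation between the two traces; values near 1 indicate
# that the synthetic video preserves the original facial dynamics.
r = np.corrcoef(au12_original, au12_synthetic)[0, 1]
```

    Correlating detected AU intensities in this way gives a dataset-level quality score that does not require ground-truth expression annotations on the synthetic videos.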

    DFUNet: Convolutional Neural Networks for Diabetic Foot Ulcer Classification

    Globally, in 2016, 1 out of 11 adults suffered from diabetes mellitus. Diabetic foot ulcers (DFU) are a major complication of this disease, which, if not managed properly, can lead to amputation. Current clinical approaches to DFU treatment rely on patient and clinician vigilance, which has significant limitations, such as the high cost involved in the diagnosis, treatment and lengthy care of the DFU. We collected an extensive dataset of foot images containing DFU from different patients. We framed DFU classification as a binary problem with two classes: normal skin (healthy skin) and abnormal skin (DFU). In this paper, we propose the use of machine learning algorithms to extract features from DFU and healthy skin patches, to understand the differences from a computer vision perspective. This experiment is performed to evaluate the skin conditions of both classes that are at high risk of misclassification by computer vision algorithms. Furthermore, we used convolutional neural networks for the first time for this binary classification. We propose a novel convolutional neural network architecture, DFUNet, with better feature extraction to identify the differences between healthy skin and DFU. Using 10-fold cross-validation, DFUNet achieved an AUC score of 0.961, outperforming both the traditional machine learning and deep learning classifiers we tested. Here, we present the development of a novel and highly sensitive DFUNet for objectively detecting the presence of DFUs. This approach has the potential to deliver a paradigm shift in diabetic foot care, representing a cost-effective, remote and convenient healthcare solution.
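    A minimal sketch of the evaluation protocol described above: 10-fold index splitting and ROC AUC computed via the Mann-Whitney statistic. This is a generic illustration with no claim to match DFUNet's exact pipeline:

```python
import numpy as np

def auc(scores, labels):
    """ROC AUC via the Mann-Whitney U statistic (no sklearn needed)."""
    scores = np.asarray(scores, float)
    labels = np.asarray(labels, int)
    pos, neg = scores[labels == 1], scores[labels == 0]
    # Fraction of (positive, negative) pairs ranked correctly; ties 0.5.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def kfold_indices(n, k=10, seed=0):
    """Shuffled sample indices split into k roughly equal folds."""
    idx = np.random.default_rng(seed).permutation(n)
    return np.array_split(idx, k)

# Perfectly separated scores give AUC = 1.0.
score = auc([0.9, 0.8, 0.3, 0.2], [1, 1, 0, 0])
folds = kfold_indices(105, k=10)
```

    In a full run, each fold in turn serves as the test set while the model trains on the other nine, and the reported AUC is the aggregate over all ten test folds.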

    Feasibility study of mobile phone photography as a possible outcome measure of systemic sclerosis-related digital lesions

    Objective: Clinical trials assessing systemic sclerosis (SSc)-related digital ulcers have been hampered by a lack of reliable outcome measures of healing. Our objective was to assess the feasibility of patients collecting high-quality mobile phone images of their digital lesions as a first step in developing a smartphone-based outcome measure. Methods: Patients with SSc-related digital (finger) lesions photographed one or more lesions each day for 30 days using their smartphones and uploaded the images to a secure Dropbox folder. Image quality was assessed using six criteria: blurriness, shadow, uniformity of lighting, dot location, dot angle and central positioning of the lesion. Patients completed a feedback questionnaire. Results: Twelve patients returned 332 photographs of 18 lesions. Each patient sent a median of 29.5 photographs [interquartile range (IQR) 15-33.5], with a median of 15 photographs per lesion (IQR 6-32). Twenty-two photographs were duplicates. Of the remaining 310 images, 256 (77%) were sufficiently in focus; 268 (81%) had some shadow; lighting was even in 56 (17%); dot location was acceptable in 233 (70%); dot angle was ideal in 107 (32%); and the lesion was centred in 255 (77%). Patient feedback suggested that 6 of 10 patients would be willing to record images daily in future studies, and 9 of 10 at least one to three times per week. Conclusion: Taking smartphone photographs of digital lesions was feasible for most patients, with most lesions in focus and central in the image. These promising results will inform the next research phase: developing a smartphone monitoring application incorporating photographs and symptom tracking.
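    Summary statistics of the kind reported above (median and IQR of photograph counts) can be computed with NumPy's percentile function. The counts below are illustrative, not the study's raw data:

```python
import numpy as np

# Hypothetical per-lesion photograph counts (illustrative numbers only).
counts = np.array([6, 9, 15, 15, 32, 33, 40])

median = np.median(counts)
# NumPy's default linear interpolation between order statistics.
q1, q3 = np.percentile(counts, [25, 75])
```

    Median and IQR are preferred over mean and standard deviation here because photograph counts per patient and per lesion are small, skewed samples.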

    Clonal Structure of Rapid-Onset MDV-Driven CD4+ Lymphomas and Responding CD8+ T Cells

    Lymphoid oncogenesis is a life-threatening complication associated with a number of persistent viral infections (e.g. EBV and HTLV-1 in humans). With many of these infections, it is difficult to study their natural history and the dynamics of tumor formation. Marek's Disease Virus (MDV) is a prevalent α-herpesvirus of poultry, inducing CD4+ TCRαβ+ T cell tumors in susceptible hosts. The high penetrance and temporal predictability of tumor induction raise questions about the clonal structure of these lymphomas. Similarly, the clonality of responding CD8+ T cells that infiltrate the tumor sites is unknown. Using TCRβ repertoire analysis tools, we demonstrated that MDV-driven CD4+ T cell tumors were dominated by one to three large clones within an oligoclonal framework of smaller CD4+ T cell clones. Individual birds had multiple tumor sites, some the result of metastasis (i.e. shared dominant clones) and others derived from distinct clones of transformed cells. The smaller oligoclonal CD4+ cells may represent an anti-tumor response, although on one occasion a low-frequency clone was transformed and expanded after culture. Metastatic tumor clones were detected in the blood early during infection and dominated the circulating T cell repertoire, leading to MDV-associated immune suppression. We also demonstrated that the tumor-infiltrating CD8+ T cell response was dominated by large oligoclonal expansions containing both “public” and “private” CDR3 sequences. The frequency of CD8+ T cell CDR3 sequences suggests initial stimulation during the early phases of infection. Collectively, our results indicate that MDV-driven tumors are dominated by a highly restricted number of CD4+ clones. Moreover, the responding CD8+ T cell infiltrate is oligoclonal, indicating recognition of a limited number of MDV antigens. These studies improve our understanding of the biology of MDV, an important poultry pathogen and a natural infection model of virus-induced tumor formation.

    Forensic microbiology reveals that Neisseria animaloris infections in harbour porpoises follow traumatic injuries by grey seals

    Neisseria animaloris is considered a commensal of the canine and feline oral cavities. It is able to cause systemic infections in animals as well as humans, usually after a biting trauma has occurred. We recovered N. animaloris from chronically inflamed bite wounds on the pectoral fins and tailstocks, and from the lungs and other internal organs, of eight harbour porpoises. Gross and histopathological evidence suggests that fatal disseminated N. animaloris infections occurred due to traumatic injury from grey seals. We therefore conclude that these porpoises survived a grey seal predatory attack, with the bite lesions representing the subsequent portal of entry for bacteria to infect the animals, causing abscesses in multiple tissues and, eventually, death. We demonstrate that forensic microbiology provides a useful tool for linking a perpetrator to its victim. Moreover, N. animaloris should be added to the list of potential zoonotic bacteria associated with seal interactions, as the finding of systemic transfer to the lungs and other tissues of the harbour porpoises suggests a potential to do likewise in humans.

    Localization of type 1 diabetes susceptibility to the MHC class I genes HLA-B and HLA-A

    The major histocompatibility complex (MHC) on chromosome 6 is associated with susceptibility to more common diseases than any other region of the human genome, including almost all disorders classified as autoimmune. In type 1 diabetes, the major genetic susceptibility determinants have been mapped to the MHC class II genes HLA-DQB1 and HLA-DRB1 (refs 1-3), but these genes cannot completely explain the association between type 1 diabetes and the MHC region. Owing to the region's extreme gene density, the multiplicity of disease-associated alleles, strong associations between alleles, limited genotyping capability, and inadequate statistical approaches and sample sizes, it remains unclear which, and how many, loci within the MHC determine susceptibility. Here, in several large type 1 diabetes data sets, we analyse a combined total of 1,729 polymorphisms, and apply statistical methods - recursive partitioning and regression - to pinpoint disease susceptibility to the MHC class I genes HLA-B and HLA-A (risk ratios > 1.5; P_combined = 2.01 × 10⁻¹⁹ and 2.35 × 10⁻¹³, respectively) in addition to the established associations of the MHC class II genes. Other loci with smaller and/or rarer effects might also be involved, but to find these, future searches must take into account both the HLA class II and class I genes and use even larger samples. Taken together with previous studies, we conclude that MHC-class-I-mediated events, principally involving HLA-B*39, contribute to the aetiology of type 1 diabetes. ©2007 Nature Publishing Group